
Fix test #142

Merged

kp-cat merged 27 commits into main from fix-test on Nov 4, 2024
Conversation

@kp-cat (Contributor) commented on Sep 24, 2024

Describe the purpose of your pull request

Clear the ConfigCat Registry before every test run to fix the broken async integration tests:

warn [3000] There is an existing ConfigCat instance for the specified SDK Key. No new instance will be created and the specified options are ignored. You can use the existing instance by passing client: 172e9daa-266f-48bb-bf32-8c4c4c2c2797 to the ConfigCat API functions. SDK Key: 'configcat-sdk-1/PKDVCLf-Hq-h-kCzMp-L7Q/1cGEJXUwYUGZCBOL-E2sOw'.
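
(For context, a minimal generic sketch of how a per-test cleanup callback is usually wired up in ExUnit; the module, test, and helper body below are illustrative placeholders, not this PR's actual code. The PR's clear_registry/1 is quoted in the review thread further down.)

defmodule RegistryCleanupSketch do
  use ExUnit.Case

  # Per-test setup: ExUnit calls the named function with the test context
  # before every test in this module.
  setup :clear_registry

  defp clear_registry(_context) do
    # Cleanup work would go here (the PR's version appears in the review below).
    :ok
  end

  test "setup callback ran before this test", do: assert 1 + 1 == 2
end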

Requirement checklist (only if applicable)

  • I have covered the applied changes with automated tests.
  • I have executed the full automated test set against my changes.
  • I have validated my changes against all supported platform versions.
  • I have read and accepted the contribution agreement.

@kp-cat kp-cat requested a review from a team as a code owner September 24, 2024 12:35
@randycoulman (Collaborator) left a comment

This seems OK, but I'm not sure the registry-clearing code is doing what you intend here. See my note below.

Based on my understanding of how Registry and start_supervised work, the only change actually required here seems to be making ConfigCatTest async: false. I think the rest of the changes could safely be reverted, including combining the two separate test files.

But you may be seeing something I'm not.
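
(For reference, a sketch of the suggested change, assuming ConfigCatTest is an ordinary ExUnit module that currently opts into async execution; everything except the async flag is illustrative.)

defmodule ConfigCatTest do
  # Switching to `async: false` stops this module from running concurrently
  # with other test modules, so only one ConfigCat client per SDK key should
  # be alive at any moment during the suite.
  use ExUnit.Case, async: false

  # ... existing tests unchanged ...
end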

Comment on lines 199 to 205
defp clear_registry(_context) do
ConfigCat.Registry
|> Registry.match({__MODULE__, :_}, :_)
|> Enum.each(fn {key, _} -> Registry.unregister(ConfigCat.Registry, key) end)

:ok
end
@randycoulman (Collaborator) commented on Sep 28, 2024

Is this part strictly necessary? Processes should remove themselves from the Registry when they shut down, and the use of start_supervised! ensures that processes are shut down when the test ends.

If it is necessary, are you sure that this code is doing what you want? __MODULE__ in the match spec will refer to this test's module (ConfigCat.IntegrationTest), but all of the started processes will have the module as ConfigCat.Supervisor, if I'm remembering correctly.
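
(A small self-contained example of the __MODULE__ point: the macro expands at compile time to whatever module it appears in, so a match pattern built from it inside the test file cannot describe keys registered under a different module. The module names below follow the ones mentioned in this thread.)

defmodule ConfigCat.IntegrationTest do
  use ExUnit.Case

  test "__MODULE__ names the enclosing module" do
    # Inside this file, __MODULE__ is ConfigCat.IntegrationTest, not
    # ConfigCat.Supervisor, so {__MODULE__, :_} can only ever match keys
    # that this test module itself registered.
    assert __MODULE__ == ConfigCat.IntegrationTest
  end
end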

@kp-cat (Contributor, Author) replied

My first attempt was simply to turn off async test execution in ConfigCatTest, but on GitHub I still got the error. After that, I implemented the clear_registry function to ensure that the previous ConfigCat client instance is removed from the Registry. If I remember correctly, even with these changes I occasionally encountered the error (though less frequently). Finally, I combined the two separate test files, and the error disappeared.

@randycoulman (Collaborator) replied

I'm pretty sure that clear_registry isn't doing anything (because __MODULE__ is different in the test than it is in the code), but if this is working for you, feel free to merge it.

@kp-cat (Contributor, Author) replied

You are right @randycoulman, I also got the error with the combined version (less frequently, but I think it is just because of the timing).

How should I use start_supervised! to ensure the process is shut down when the test ends?

@randycoulman (Collaborator) replied

From what I can see, we're already using start_supervised in start_config_cat, which is what every test calls. Clearly something isn't shutting down fast enough, though. I have a different idea to try. I'll open up a draft PR once I have it working.
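
(To illustrate the teardown behaviour being relied on, here is a generic ExUnit sketch that uses a plain Agent as a stand-in rather than the real ConfigCat client or this project's start_config_cat helper.)

defmodule StartSupervisedSketch do
  use ExUnit.Case, async: false

  test "children started with start_supervised!/1 are stopped after the test" do
    # The child runs under the test's own supervisor, so ExUnit terminates it
    # (and waits for the shutdown to complete) when this test exits.
    pid = start_supervised!({Agent, fn -> :ok end})
    assert Process.alive?(pid)
  end
end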

@randycoulman (Collaborator) replied

@kp-cat See #143 for an alternate solution for this test problem.

If we find that the tests continue spuriously failing, we could change our CI config to run mix coveralls.json --warnings-as-errors --max-cases 1. That would force the tests to all run serially. That'll make the build a bit slower, but should make the problem go away.

@kp-cat (Contributor, Author) replied

Thanks @randycoulman,
I would rather avoid extra dependencies just for this issue, so I prefer the --max-cases 1 solution. I changed the CI config slightly to run the integration tests separately. What do you think about these changes here?

@kp-cat force-pushed the fix-test branch 4 times, most recently from e5c2b70 to 6fdaad6 on October 30, 2024 at 13:50
@randycoulman (Collaborator) left a comment

As mentioned below, --max-cases 1 isn't doing anything different than what ExUnit does on its own. What happens when you run the entire test suite with --max-cases 1 instead? If that works, it would simplify the changes you had to make to merge the coverage information.

That said, if this separation is actually solving the problem, you should go ahead and merge it.

@@ -58,7 +58,9 @@ jobs:
        run: mix dialyzer

      - name: Execute tests
-       run: mix coveralls.json --warnings-as-errors
+       run: |
+         mix test --only integration --warnings-as-errors --max-cases 1 --cover --export-coverage integration-coverage
@randycoulman (Collaborator) replied

--max-cases 1 has no effect here. ExUnit will run async: true modules in parallel with each other, but runs the tests within each module in series. Since there is only one module (aka case) with the integration tag, there will only be one case being run no matter what.
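
(A generic illustration of that behaviour with throwaway modules, none of which are project code: the two async modules may run concurrently with each other, while the tests inside each one always run serially, which is why --max-cases only matters when more than one case is selected.)

defmodule SketchCaseA do
  use ExUnit.Case, async: true

  # These two tests never run at the same time: tests within a single
  # module (case) are always executed one after another.
  test "a1", do: assert 1 + 1 == 2
  test "a2", do: assert 2 + 2 == 4
end

defmodule SketchCaseB do
  use ExUnit.Case, async: true

  # This module may run concurrently with SketchCaseA; --max-cases caps how
  # many such async modules run in parallel, so with only one selected case
  # the flag changes nothing.
  test "b1", do: assert 3 + 3 == 6
end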

@kp-cat (Contributor, Author) replied

You're right, but if I remove --max-cases 1 it seems the integration test can break again 😒
I'll just keep it like this.

@kp-cat merged commit abb9eb6 into main on Nov 4, 2024
7 checks passed
@kp-cat deleted the fix-test branch on November 4, 2024 at 15:21